The US Cybersecurity and Infrastructure Security Agency (CISA) has released new guidance warning that insider threats represent a major and growing risk to organizational security. The advisory was issued during the same week reports emerged about a senior agency official mishandling sensitive information, drawing renewed attention to the dangers posed by internal security lapses.
In its announcement, CISA described insider threats as risks that originate from within an organization and can arise from either malicious intent or accidental mistakes. The agency stressed that trusted individuals with legitimate system access can unintentionally cause serious harm to data security, operational stability, and public confidence.
To help organizations manage these risks, CISA published an infographic outlining how to create a structured insider threat management team. The agency recommends that these teams include professionals from multiple departments, such as human resources, legal counsel, cybersecurity teams, IT leadership, and threat analysis units. Depending on the situation, organizations may also need to work with external partners, including law enforcement or health and risk professionals.
According to CISA, these teams are responsible for overseeing insider threat programs, identifying early warning signs, and responding to potential risks before they escalate into larger incidents. The agency also pointed organizations to additional free resources, including a detailed mitigation guide, training workshops, and tools to evaluate the effectiveness of insider threat programs.
Acting CISA Director Madhu Gottumukkala emphasized that insider threats are particularly challenging to detect and prevent, and that they can undermine trust and disrupt critical operations.
Shortly before the guidance was released, media reports revealed that Gottumukkala had uploaded sensitive CISA contracting documents into a public version of an AI chatbot during the previous summer. According to unnamed officials, the activity triggered automated security alerts designed to prevent unauthorized data exposure from federal systems.
CISA’s Director of Public Affairs later confirmed that the chatbot was used with specific controls in place and stated that the usage was limited in duration. The agency noted that the official had received temporary authorization to access the tool and last used it in mid-July 2025.
By default, CISA blocks employee access to public AI platforms unless an exception is granted. The Department of Homeland Security, which oversees CISA, also operates an internal AI system designed to prevent sensitive government information from leaving federal networks.
Security experts caution that data shared with public AI services may be stored or processed outside the user’s control, depending on platform policies. This makes such tools particularly risky when handling government or critical infrastructure information.
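To make the risk concrete, the sketch below illustrates the kind of default-deny egress control these policies imply: outbound requests to AI services are blocked unless the destination is on an internal allowlist and the payload carries no sensitivity markers. This is a minimal illustration only; the host names, the markers, and the check_outbound helper are hypothetical, and real data-loss-prevention systems rely on trained classifiers and policy engines rather than simple pattern matching.

```python
import re

# Hypothetical sensitivity markers; real DLP tooling uses classification,
# document fingerprinting, and policy engines, not bare regexes.
SENSITIVE_MARKERS = [
    re.compile(r"\bCUI\b"),           # Controlled Unclassified Information tag
    re.compile(r"\bFOUO\b"),          # For Official Use Only tag
    re.compile(r"contract\s+no\.?\s*\d+", re.IGNORECASE),
]

# Hypothetical allowlist standing in for an approved internal AI service.
ALLOWED_AI_HOSTS = {"internal-ai.example.gov"}

def check_outbound(host: str, payload: str) -> bool:
    """Return True if the request may proceed, False if it must be blocked."""
    if host not in ALLOWED_AI_HOSTS:
        return False  # default-deny: public AI platforms are blocked outright
    if any(p.search(payload) for p in SENSITIVE_MARKERS):
        return False  # even an approved host must not receive flagged content
    return True

# A public chatbot is rejected regardless of content, and the approved
# internal host still rejects text carrying a sensitivity marker.
assert check_outbound("chatbot.example.com", "meeting agenda") is False
assert check_outbound("internal-ai.example.gov", "CUI contract no. 42") is False
assert check_outbound("internal-ai.example.gov", "meeting agenda") is True
```

The design choice worth noting is the ordering: the host check runs before any content inspection, so sensitive text never needs to reach an unapproved destination for the request to be refused.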
The incident adds to a series of reported internal disputes and security-related controversies involving senior leadership, as well as similar lapses across other US government departments in recent years. These cases illustrate how poor internal controls and the misuse of personal or unsecured technologies can place national security and critical infrastructure at risk.
While CISA’s guidance is primarily aimed at critical infrastructure operators and regional governments, recent events suggest that insider threat management remains a challenge across all levels of government. As organizations increasingly rely on AI and interconnected digital systems, experts continue to stress that strong oversight, clear policies, and leadership accountability are essential to reducing insider-related security risks.
The breach occurred in the EEOC's Public Portal system, where unauthorized access to agency data may have disclosed personal data contained in logs provided to the agency by the public. “Staff employed by the contractor, who had privileged access to EEOC systems, were able to handle data in an unauthorized (UA) and prohibited manner in early 2025,” reads the EEOC email notification sent by its data security office.
The email said the review suggested that personally identifiable information (PII) may have been leaked, varying by individual. The exposed information may include names, contact details, and other data. The review is still ongoing while the EEOC works with law enforcement.
The EEOC has asked individuals to review their financial accounts for any malicious activity and has also asked portal users to reset their passwords.
Contracting data indicates that EEOC had a contract with Opexus, a company that provides case management software solutions to the federal government.
An Opexus spokesperson confirmed this, saying EEOC and Opexus “took immediate action when we learned of this activity, and we continue to support investigative and law enforcement efforts into these individuals’ conduct, which is under active prosecution in the Federal Court of the Eastern District of Virginia.”
Addressing the role of the contractor’s employees in the breach, the spokesperson added: “While the individuals responsible met applicable seven-year background check requirements consistent with prevailing government and industry standards at the time of hire, this incident made clear that personnel screening alone is not sufficient.”
The EEOC is at the center of the second Trump administration's efforts to root out what it calls “illegal discrimination” driven by diversity, equity, and inclusion programs, which over the past year have been scrutinized and dismantled at almost every level of the federal government.
The developments have affected large private companies across the country. In an X post this month, EEOC chairwoman Andrea Lucas asked white men whether they had experienced racial or sexual discrimination at work and urged them to report their experiences to the agency “as soon as possible.”
“We're already seeing traditional boundaries blur: payments, lending, embedded finance, and banking capabilities are coming closer together as players look to build more integrated and efficient models. While payments continue to be powerful for driving access and engagement, long-term value will come from combining scale with operational efficiency across the financial stack,” said Ramki Gaddapati, Co-Founder, APAC CEO and Global CTO, Zeta.
India’s fintech industry is preparing to enter 2026 with a new compliance-first focus. Artificial intelligence (AI) is emerging as a critical tool in this transformation, helping firms strengthen fraud detection, streamline regulatory processes, and enhance customer trust.
According to Reserve Bank of India (RBI) data, digital payment volumes crossed 180 billion transactions in FY25, powered largely by the Unified Payments Interface (UPI) and embedded payment systems across commerce, mobility, and lending platforms.
Yet, regulators and industry leaders are increasingly concerned about operational risks and fraud. The RBI, along with the Bank for International Settlements (BIS), has highlighted vulnerabilities in digital payment ecosystems, urging fintechs to adopt stronger compliance frameworks.
Artificial intelligence is set to play a central role in this compliance-first era. Fintech firms are deploying AI to do the following (a minimal sketch of the first use case appears after the list):
Detect and prevent fraudulent transactions in real time
Automate compliance reporting and monitoring
Personalize customer experiences while maintaining data security
Analyze risk patterns across lending and investment platforms
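As a concrete illustration of real-time fraud detection, here is a minimal rule-based sketch of transaction screening. Everything in it is hypothetical (the Txn type, the thresholds, the score_transaction helper); production systems tune such thresholds from historical data and combine velocity rules with trained models over far richer features.

```python
from collections import defaultdict
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Txn:
    account: str
    amount: float
    timestamp: datetime

# Hypothetical thresholds; real systems calibrate these from fraud history.
AMOUNT_LIMIT = 100_000.0                 # flag unusually large single transfers
VELOCITY_WINDOW = timedelta(minutes=5)   # sliding window for velocity checks
VELOCITY_LIMIT = 10                      # max transactions per account per window

_recent: dict[str, list[datetime]] = defaultdict(list)

def score_transaction(txn: Txn) -> bool:
    """Return True if the transaction should be held for review."""
    # Keep only timestamps inside the sliding window, then record this one.
    window = [t for t in _recent[txn.account] if txn.timestamp - t <= VELOCITY_WINDOW]
    window.append(txn.timestamp)
    _recent[txn.account] = window
    if txn.amount > AMOUNT_LIMIT:
        return True   # a single large transaction is suspicious on its own
    if len(window) > VELOCITY_LIMIT:
        return True   # a burst of transactions in a short window
    return False

now = datetime.now()
assert score_transaction(Txn("acct-1", 250_000.0, now)) is True   # over the amount limit
assert score_transaction(Txn("acct-1", 500.0, now)) is False      # normal activity
```

The point of the sketch is the screening shape, not the specific rules: each incoming transaction is scored synchronously against per-account state, which is what lets a flag be raised before the payment completes rather than in a nightly batch.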
The sector is also diversifying beyond payments. Fintechs are moving deeper into credit, wealth management, and banking-related services, areas that demand stricter oversight. This diversification allows firms to capture new revenue streams and broaden their customer base, but it also exposes them to heightened regulatory scrutiny and the need for more robust governance structures.
“The DPDP Act is important because it protects personal data and builds trust. Without compliance, organisations face penalties, data breaches, customer loss, and reputational damage. Following the law improves credibility, strengthens security, and ensures responsible data handling for sustained business growth,” said Neha Abbad, co-founder, CyberSigma Consulting.